Search for: All records

Creators/Authors contains: "Jarosz, Wojciech"


  1. This 3-hour course provides a detailed overview of grid-free Monte Carlo methods for solving partial differential equations (PDEs) based on the walk on spheres (WoS) algorithm, with a special emphasis on problems with high geometric complexity. PDEs are a basic building block of models and algorithms used throughout science, engineering, and visual computing. Yet despite decades of research, conventional PDE solvers struggle to capture the immense geometric complexity of the natural world. A perennial challenge is spatial discretization: traditionally, one must partition the domain into a high-quality volumetric mesh, a process that can be brittle, memory-intensive, and difficult to parallelize. WoS makes a radical departure from this approach by reformulating the problem in terms of recursive integral equations that can be estimated using the Monte Carlo method, eliminating the need for spatial discretization. Since these equations strongly resemble those found in light transport theory, one can leverage deep knowledge from Monte Carlo rendering to develop new PDE solvers that share many of its advantages: no meshing, trivial parallelism, and the ability to evaluate the solution at any point without solving a global system of equations. The course is divided into two parts. Part I will cover the basics of using WoS to solve fundamental PDEs like the Poisson equation. Topics include formulating the solution as an integral equation, generating samples via recursive random walks, and employing accelerated distance and ray intersection queries to efficiently handle complex geometries. Participants will also gain experience setting up demo applications involving data interpolation, heat transfer, and geometric optimization using the open-source “Zombie” library, which implements various grid-free Monte Carlo PDE solvers. 
Part II will feature a mini-panel of academic and industry contributors covering advanced topics including variance reduction, differentiable and multi-physics simulation, and applications in industrial design and robust geometry processing. 
    Free, publicly-accessible full text available August 10, 2026
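The WoS recursion described above is compact enough to sketch in a few lines. Below is a minimal, self-contained estimator for the Laplace equation on a toy 2D disk domain; it is illustrative only and does not use the Zombie library's API (the domain, Dirichlet data, and tolerances here are assumptions for the example):

```python
import math, random

random.seed(0)

def sample_sphere(center, radius):
    """Uniformly sample a point on the sphere of given radius (a circle in 2D)."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (center[0] + radius * math.cos(theta),
            center[1] + radius * math.sin(theta))

def walk_on_spheres(x, dist_to_boundary, boundary_value, eps=1e-3, max_steps=1000):
    """Estimate the solution of the Laplace equation at x via one random walk.

    dist_to_boundary(p) returns the distance from p to the domain boundary;
    boundary_value(p) returns the Dirichlet data at a (near-)boundary point p.
    """
    for _ in range(max_steps):
        r = dist_to_boundary(x)
        if r < eps:                 # close enough: terminate on the boundary
            return boundary_value(x)
        x = sample_sphere(x, r)     # jump uniformly to the largest empty sphere
    return boundary_value(x)

# Toy domain: unit disk with Dirichlet data g(x, y) = x. Since g is harmonic,
# the exact solution inside the disk is u(x, y) = x.
dist = lambda p: 1.0 - math.hypot(p[0], p[1])
g    = lambda p: p[0]

n = 20000
estimate = sum(walk_on_spheres((0.3, 0.2), dist, g) for _ in range(n)) / n
# estimate should be near the exact value u(0.3, 0.2) = 0.3
```

Note how the walk needs only a distance query per step, which is why accelerated distance queries are the key geometric primitive for complex scenes.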
  2. Abstract We present a wave‐optics‐based BSDF for simulating the corona effect observed when viewing strong light sources through materials such as certain fabrics or glass surfaces with condensation. These visual phenomena arise from the interference of diffraction patterns caused by correlated, disordered arrangements of droplets or pores. Our method leverages the pair correlation function (PCF) to decouple the spatial relationships between scatterers from the diffraction behavior of individual scatterers. This two‐level decomposition allows us to derive a physically based BSDF that provides explicit control over both scatterer shape and spatial correlation. We also introduce a practical importance sampling strategy for integrating our BSDF within a Monte Carlo renderer. Our simulation results and real‐world comparisons demonstrate that the method can reliably reproduce the characteristics of the corona effects in various real‐world diffractive materials. 
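The two-level decomposition above can be illustrated numerically: in standard scattering theory the far-field intensity factors into a per-scatterer diffraction term |F(q)|² and a structure factor S(q) computed from the scatterer positions, whose behavior at small q reflects the pair correlation. The sketch below is a toy illustration, not the paper's BSDF: it assumes a Gaussian lobe for |F(q)|² and uses dart throwing to produce a hard-core correlated point set:

```python
import numpy as np

rng = np.random.default_rng(7)

def dart_throw(n, box, min_dist):
    """Correlated disordered points: uniform darts, rejected if too close."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(0.0, box, 2)
        if all(np.hypot(*(p - q)) >= min_dist for q in pts):
            pts.append(p)
    return np.array(pts)

def structure_factor(points, q_mags):
    """S(q) = |sum_j exp(-i q . x_j)|^2 / N, averaged over a few directions."""
    n = len(points)
    dirs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071], [0.7071, -0.7071]])
    s = np.zeros_like(q_mags)
    for d in dirs:
        proj = points @ d
        s += np.abs(np.exp(-1j * np.outer(q_mags, proj)).sum(axis=1)) ** 2 / n
    return s / len(dirs)

box, n_pts = 50.0, 400
poisson = rng.uniform(0.0, box, (n_pts, 2))        # uncorrelated droplets
correlated = dart_throw(n_pts, box, min_dist=1.5)  # hard-core correlated droplets

q = np.linspace(0.3, 1.5, 25)
S_pois, S_corr = structure_factor(poisson, q), structure_factor(correlated, q)

# Assumed Gaussian single-scatterer lobe |F(q)|^2; the intensity decomposes as
# a scatterer-shape term times a spatial-correlation term, as in the paper.
F2 = np.exp(-(0.8 * q) ** 2)
I_pois, I_corr = F2 * S_pois, F2 * S_corr
```

The correlated arrangement suppresses S(q) at small q relative to the Poisson case, which is the mechanism behind the characteristic ring structure of corona patterns.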
  3. Optical heterodyne detection (OHD) employs coherent light and optical interference techniques (Fig. 1-(A)) to extract physical parameters, such as velocity or distance, which are encoded in the frequency modulation of the light. With its superior signal-to-noise ratio compared to incoherent detection methods, such as time-of-flight lidar, OHD has become integral to applications requiring high sensitivity, including autonomous navigation, atmospheric sensing, and biomedical velocimetry. However, current simulation tools for OHD focus narrowly on specific applications, relying on domain-specific settings like restricted reflection functions, scene configurations, or single-bounce assumptions, which limit their applicability. In this work, we introduce a flexible and general framework for spectral-domain simulation of OHD. We demonstrate that the classical radiometry-based path integral formulation can be adapted and extended to simulate OHD measurements in the spectral domain. This enables us to leverage the rich modeling and sampling capabilities of existing Monte Carlo path tracing techniques. Our formulation shares structural similarities with transient rendering but operates in the spectral domain and accounts for the Doppler effect (Fig. 1-(B)). While simulators for the Doppler effect in incoherent (intensity) detection methods exist, they are largely unsuitable for simulating OHD. We use a microsurface interpretation to show that these two Doppler imaging techniques capture different physical quantities and thus need different simulation frameworks. We validate the correctness and predictive power of our simulation framework by qualitatively comparing the simulations with real-world captured data for three different OHD applications—FMCW lidar, blood flow velocimetry, and wind Doppler lidar (Fig. 1-(C)). 
    Free, publicly-accessible full text available August 1, 2026
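As a concrete instance of how OHD encodes physical parameters in frequency, an FMCW lidar's beat frequency is the standard sum of a range term and a Doppler term. The sketch below uses illustrative numbers, not values from the paper:

```python
# Spectral-domain view of optical heterodyne detection for FMCW lidar: the
# photodetector output oscillates at the beat between the local oscillator and
# the delayed, Doppler-shifted return. All parameter values are assumptions.
C = 299_792_458.0      # speed of light, m/s
WAVELENGTH = 1550e-9   # carrier wavelength, m
CHIRP_RATE = 1e12      # chirp slope gamma, Hz/s (1 THz per ms, assumed)

def beat_frequency(range_m, radial_velocity):
    """Up-chirp beat: range term 2*gamma*R/c plus Doppler term 2*v/lambda."""
    f_range = 2.0 * CHIRP_RATE * range_m / C
    f_doppler = 2.0 * radial_velocity / WAVELENGTH
    return f_range + f_doppler

# A target 30 m away, closing at 10 m/s: the Doppler term (~12.9 MHz) dominates
# the range term (~0.2 MHz) at this chirp rate.
f = beat_frequency(30.0, 10.0)
```

Measuring both up- and down-chirps lets the two terms be separated, which is why a spectral-domain simulator must model the Doppler shift explicitly rather than only time of flight.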
  4. Zorin, Denis; Jarosz, Wojciech (Ed.)
  5. Burbano, Andres; Zorin, Denis; Jarosz, Wojciech (Ed.)
  6. Stochastic geometry models have enjoyed immense success in graphics for modeling interactions of light with complex phenomena such as participating media, rough surfaces, fibers, and more. Although each of these models operates on the same principle of replacing intricate geometry by a random process and deriving the average light transport across all instances thereof, they are each tailored to one specific application and are fundamentally distinct. Each type of stochastic geometry present in the scene is firmly encapsulated in its own appearance model, with its own statistics and light transport average, and no cross-talk is possible between different models, or between deterministic and stochastic geometry. In this paper, we derive a theory of light transport on stochastic implicit surfaces, a geometry model capable of expressing deterministic geometry, microfacet surfaces, participating media, and an exciting new continuum in between containing aggregate appearance, non-classical media, and more. Our model naturally supports spatial correlations, missing from most existing stochastic models. Our theory paves the way for tractable rendering of scenes in which all geometry is described by the same stochastic model, while leaving ample future work for developing efficient sampling and rendering algorithms. 
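The ensemble-averaging principle can be sketched with a toy 1D stochastic implicit surface: averaging ray visibility over random realizations interpolates between a sharp deterministic boundary and a medium-like gradual falloff. The planar surface and Gaussian offset below are illustrative assumptions, not the paper's model:

```python
import math, random

random.seed(1)

def first_crossing(f, t_max, dt=0.01):
    """March along the ray; return the first t where the implicit function
    changes sign from positive (outside) to non-positive (inside)."""
    prev = f(0.0)
    t = dt
    while t <= t_max:
        cur = f(t)
        if prev > 0.0 >= cur:
            return t
        prev, t = cur, t + dt
    return None

def avg_transmittance(depth, n=4000, jitter=0.0):
    """Fraction of surface realizations the ray passes unoccluded up to `depth`.

    Each realization perturbs the planar implicit surface f(t) = 1 - t by a
    random offset of scale `jitter` (a crude stand-in for a general stochastic
    implicit model; rays starting inside a realization count as unoccluded).
    """
    hits = 0
    for _ in range(n):
        off = random.gauss(0.0, jitter)
        if first_crossing(lambda t: 1.0 - t + off, depth) is not None:
            hits += 1
    return 1.0 - hits / n

# jitter = 0 recovers a deterministic surface: fully visible before t = 1 and
# fully occluded past it. Larger jitter blurs the boundary toward the gradual,
# medium-like transmittance falloff of the continuum described above.
```

Averaging over realizations before rendering, rather than rendering each instance, is exactly the step that the paper's theory carries out analytically for general stochastic implicit surfaces.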
  7. The constellation of Earth-observing satellites continuously collects measurements of scattered radiance, which must be transformed into geophysical parameters in order to answer fundamental scientific questions about the Earth. Retrieval of these parameters requires highly flexible, accurate, and fast forward and inverse radiative transfer models. Existing forward models used by the remote sensing community are typically accurate and fast, but sacrifice flexibility by assuming the atmosphere or ocean is composed of plane-parallel layers. Monte Carlo forward models can handle more complex scenarios such as 3D spatial heterogeneity, but are relatively slower. We propose looking to the computer graphics community for inspiration to improve the statistical efficiency of Monte Carlo forward models and explore new approaches to inverse models for remote sensing. In Part 2 of this work, we demonstrate that Monte Carlo forward models in computer graphics are capable of sufficient accuracy for remote sensing by extending Mitsuba 3, a forward and inverse modeling framework recently developed in the computer graphics community, to simulate simple atmosphere-ocean systems and show that our framework is capable of achieving error on par with codes currently used by the remote sensing community on benchmark results. 
  8. The constellation of Earth-observing satellites continuously collects measurements of scattered radiance, which must be transformed into geophysical parameters in order to answer fundamental scientific questions about the Earth. Retrieval of these parameters requires highly flexible, accurate, and fast forward and inverse radiative transfer models. Existing forward models used by the remote sensing community are typically accurate and fast, but sacrifice flexibility by assuming the atmosphere or ocean is composed of plane-parallel layers. Monte Carlo forward models can handle more complex scenarios such as 3D spatial heterogeneity, but are relatively slower. We propose looking to the computer graphics community for inspiration to improve the statistical efficiency of Monte Carlo forward models and explore new approaches to inverse models for remote sensing. In Part 1 of this work, we examine the evolution of radiative transfer models in computer graphics and highlight recent advancements that have the potential to push forward models in remote sensing beyond their current periphery of realism. 
  9. We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF rendering for dynamic scenes, with applications in simulating D-ToF cameras. D-ToF cameras use high-frequency modulation of illumination and exposure, and measure the Doppler frequency shift to compute the radial velocity of dynamic objects. The time-varying scene geometry and high-frequency modulation functions used in such cameras make it challenging to accurately and efficiently simulate their measurements with existing ToF rendering algorithms. We overcome these challenges in a twofold manner: To achieve accuracy, we derive path integral expressions for D-ToF measurements under global illumination and form unbiased Monte Carlo estimates of these integrals. To achieve efficiency, we develop a tailored time-path sampling technique that combines antithetic time sampling with correlated path sampling. We show experimentally that our sampling technique achieves up to two orders of magnitude lower variance compared to naive time-path sampling. We provide an open-source simulator that serves as a digital twin for D-ToF imaging systems, allowing imaging researchers, for the first time, to investigate the impact of modulation functions, material properties, and global illumination on D-ToF imaging performance. 
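The variance-reduction idea behind antithetic time sampling can be shown on a toy heterodyne-style correlation integral: pairing each time sample with a partner shifted by half the period of the high-frequency beat cancels that term in the pair average, leaving only the slow Doppler drift. This is a stand-in illustration with assumed parameters, not the paper's path-space estimator:

```python
import math, random

random.seed(3)

# Toy correlation measurement: exposure modulation cos(w t) multiplied by a
# Doppler-drifted return cos(w t + phi + eps t). All constants are assumptions.
W, PHI, EPS, T = 2.0 * math.pi * 50e6, 0.7, 2.0 * math.pi * 1e2, 1e-3

def g(t):
    # expands to 0.5*cos(phi + eps t) + 0.5*cos(2 w t + phi + eps t):
    # a slow Doppler term plus a high-frequency beat at 2w.
    return math.cos(W * t) * math.cos(W * t + PHI + EPS * t)

def naive(n):
    """Plain Monte Carlo estimate of (1/T) * integral of g over [0, T]."""
    return sum(g(random.uniform(0.0, T)) for _ in range(n)) / n

def antithetic(n):
    """Antithetic time sampling: shift each sample by half the 2w-beat period,
    so the high-frequency term flips sign and cancels in the pair average."""
    half = math.pi / (2.0 * W)
    total = 0.0
    for _ in range(n // 2):
        t = random.uniform(0.0, T)
        total += 0.5 * (g(t) + g(t + half))
    return total / (n // 2)

est_naive, est_anti = naive(2000), antithetic(2000)
```

Both estimators are unbiased for the same integral, but the naive one carries the full variance of the 2w beat while the antithetic pairs remove it, mirroring the time-path sampling strategy described above.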
  10. Null-collision approaches for estimating transmittance and sampling free-flight distances are the current state-of-the-art for unbiased rendering of general heterogeneous participating media. However, null-collision approaches have a strict requirement for specifying a tightly bounding total extinction in order to remain both robust and performant; in practice, this requirement restricts the use of null-collision techniques to participating media where the density of the medium at every possible point in space is known a priori. In production rendering, a common case is a medium in which density is defined by a black-box procedural function for which a bounding extinction cannot be determined beforehand. Typically in this case, a bounding extinction must be approximated by using an overly loose and therefore computationally inefficient conservative estimate. We present an analysis of how null-collision techniques degrade when a more aggressive initial guess for a bounding extinction underestimates the true maximum density and turns out to be non-bounding. We then build upon this analysis to arrive at two new techniques: first, a practical, efficient, consistent progressive algorithm that allows us to robustly adapt null-collision techniques for use with procedural media with unknown bounding extinctions, and second, a new importance sampling technique that improves ratio tracking based on zero-variance sampling. 
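The two baseline null-collision estimators this work builds on, delta tracking for free-flight distances and ratio tracking for transmittance, can be sketched in a few lines for a 1D heterogeneous medium. The extinction function and majorant below are illustrative; note how both estimators silently depend on the majorant actually bounding the density, which is the assumption the paper relaxes:

```python
import math, random

random.seed(5)

def sigma_t(x):
    """Heterogeneous extinction along a 1D ray (an illustrative 'procedural'
    density whose true maximum is 0.9)."""
    return 0.5 + 0.4 * math.sin(3.0 * x) ** 2

SIGMA_MAJ = 1.0  # bounding (majorant) extinction; requires sigma_t <= SIGMA_MAJ

def delta_tracking_distance(t_max):
    """Sample a free-flight distance: take tentative exponential steps with the
    majorant, accepting a collision as real with probability sigma_t / majorant."""
    t = 0.0
    while True:
        t -= math.log(1.0 - random.random()) / SIGMA_MAJ
        if t >= t_max:
            return None            # escaped the medium without a real collision
        if random.random() < sigma_t(t) / SIGMA_MAJ:
            return t               # real collision
        # otherwise: null collision, keep walking

def ratio_tracking_transmittance(t_max):
    """Unbiased transmittance estimate: instead of terminating at tentative
    collisions, multiply in the null-collision ratio, reducing variance for
    transmittance queries."""
    t, w = 0.0, 1.0
    while True:
        t -= math.log(1.0 - random.random()) / SIGMA_MAJ
        if t >= t_max:
            return w
        w *= 1.0 - sigma_t(t) / SIGMA_MAJ

n = 50000
est = sum(ratio_tracking_transmittance(2.0) for _ in range(n)) / n
# est approximates exp(-integral of sigma_t over [0, 2])
```

If SIGMA_MAJ were set below the true maximum of sigma_t, the acceptance ratio would exceed one and the null-collision weights would go negative, which is precisely the degradation regime the paper analyzes.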